Hilbert's "Verunglueckter Beweis," the first epsilon theorem, and consistency proofs
In the 1920s, Ackermann and von Neumann, in pursuit of Hilbert's Programme,
were working on consistency proofs for arithmetical systems. One proposed
method of giving such proofs is Hilbert's epsilon-substitution method. There
was, however, a second approach which was not reflected in the publications of
the Hilbert school in the 1920s, and which is a direct precursor of Hilbert's
first epsilon theorem and a certain 'general consistency result' due to
Bernays. An analysis of the form of this so-called 'failed proof' sheds further
light on an interpretation of Hilbert's Programme as an instrumentalist
enterprise, with the aim of showing that whenever a 'real' proposition can be
proved by 'ideal' means, it can also be proved by 'real', finitary means.
Comment: 18 pages, final version
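For readers unfamiliar with the epsilon calculus, the relevant definitions can be stated briefly (this is standard material, not specific to this paper):

```latex
% The epsilon operator \varepsilon x\,A(x) denotes a witness for A(x), if one
% exists, and is governed by the critical-formula schema
\[
  A(t) \rightarrow A(\varepsilon x\, A(x)).
\]
% Quantifiers then become definable:
\[
  \exists x\, A(x) \leftrightarrow A(\varepsilon x\, A(x)),
  \qquad
  \forall x\, A(x) \leftrightarrow A(\varepsilon x\, \neg A(x)).
\]
% The first epsilon theorem states that a quantifier-free formula derivable
% using epsilon terms and quantifiers is already derivable without them.
```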
Basic Logic and Quantum Entanglement
As is well known, quantum entanglement is one of the most important
features of quantum computing, as it leads to massive quantum parallelism
and hence to exponential computational speed-up. In a sense, quantum
entanglement is considered an implicit property of quantum computation
itself. But can
it be made explicit? In other words, is it possible to find the connective
"entanglement" in a logical sequent calculus for the machine language? And
also, is it possible to "teach" the quantum computer to "mimic" the EPR
"paradox"? The answer is in the affirmative, if the logical sequent calculus is
that of the weakest possible logic, namely Basic logic. A weak logic has few
structural rules. But in logic, a weak structure leaves more room for
connectives (for example the connective "entanglement"). Furthermore, the
absence in Basic logic of the two structural rules of contraction and weakening
corresponds to the validity of the no-cloning and no-erase theorems,
respectively, in quantum computing.
Comment: 10 pages, 1 figure, LaTeX. Shorter version for proceedings requirements. Contributed paper at DICE2006, Piombino, Italy
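The two structural rules in question are standard; in sequent notation (left rules shown) they read:

```latex
\[
  \frac{\Gamma \vdash \Delta}{\Gamma, A \vdash \Delta}\ (\text{weakening})
  \qquad
  \frac{\Gamma, A, A \vdash \Delta}{\Gamma, A \vdash \Delta}\ (\text{contraction})
\]
```

Read resource-wise, contraction duplicates the formula A and weakening discards it; dropping both rules mirrors the impossibility of copying (no-cloning) or deleting (no-erase) an unknown quantum state.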
Polarizing Double Negation Translations
Double-negation translations are used to encode and decode classical proofs
in intuitionistic logic. We show that, in the cut-free fragment, we can
simplify the translations and introduce fewer negations. To achieve this, we
consider the polarization of the formulae and adapt the translation to
the different connectives and quantifiers. We show that the embedding results
still hold, using a customized version of the focused classical sequent
calculus. We also prove that the latter is equivalent to more usual versions
of the sequent calculus. This polarization process allows lighter embeddings,
and sheds some light on the relationship between intuitionistic and classical
connectives.
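As a concrete point of comparison, the classical Goedel-Gentzen translation (one standard double-negation translation, not the polarized one introduced in the paper) can be sketched in a few lines; the tuple encoding of formulas is an illustrative assumption:

```python
# Goedel-Gentzen double-negation translation on propositional formulas.
# Formulas are atoms (strings) or tuples ("not", a), ("and", a, b),
# ("or", a, b), ("imp", a, b) -- an illustrative encoding.

def neg(a):
    return ("not", a)

def gg(f):
    """If f is classically provable, gg(f) is intuitionistically provable."""
    if isinstance(f, str):                    # atom: double-negate
        return neg(neg(f))
    op = f[0]
    if op == "not":
        return neg(gg(f[1]))
    if op in ("and", "imp"):                  # conjunction/implication commute
        return (op, gg(f[1]), gg(f[2]))
    if op == "or":                            # disjunction via De Morgan
        return neg(("and", neg(gg(f[1])), neg(gg(f[2]))))
    raise ValueError(f"unknown connective: {op}")

# Excluded middle, p \/ ~p, maps to ~(~~~p /\ ~~~~p), an intuitionistic theorem:
lem = ("or", "p", ("not", "p"))
print(gg(lem))
```

The number of negations this naive translation introduces is exactly what the paper's polarized variants aim to reduce.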
Fixed-point elimination in the intuitionistic propositional calculus
It is a consequence of existing literature that least and greatest
fixed-points of monotone polynomials on Heyting algebras, that is, the algebraic
models of the Intuitionistic Propositional Calculus, always exist, even when
these algebras are not complete as lattices. The reason is that these extremal
fixed-points are definable by formulas of the IPC. Consequently, the
μ-calculus based on intuitionistic logic is trivial, every μ-formula
being equivalent to a fixed-point free formula. We give in this paper an
axiomatization of least and greatest fixed-points of formulas, and an algorithm
to compute a fixed-point free formula equivalent to a given μ-formula. The
axiomatization of the greatest fixed-point is simple. The axiomatization of the
least fixed-point is more complex: in particular, every monotone formula
converges to its least fixed-point by Kleene's iteration in a finite number n
of steps, but there is no uniform upper bound on the number of iterations. We
extract, out of the algorithm, upper bounds for such n, depending on the size
of the formula. For some formulas, we show that these upper bounds are
polynomial and optimal.
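The Kleene iteration underlying the least fixed-point can be sketched on a finite lattice; the powerset example below is an illustrative stand-in for a Heyting algebra, not the paper's algorithm:

```python
# Kleene iteration: least fixed-point of a monotone map on a finite lattice,
# here the powerset of {0, 1, 2} ordered by inclusion.

def lfp(f, bottom):
    """Iterate f from bottom until stable; for monotone f on a finite
    lattice this yields the least fixed-point. Returns (fixpoint, steps)."""
    x, steps = bottom, 0
    while True:
        y = f(x)
        if y == x:
            return x, steps
        x, steps = y, steps + 1

# f(S) = {0} together with the successors of S: monotone, and its least
# fixed-point {0, 1, 2} is reached after 3 strictly increasing steps.
def f(s):
    return frozenset({0}) | frozenset(i + 1 for i in s if i + 1 <= 2)

fix, steps = lfp(f, frozenset())
print(sorted(fix), steps)   # -> [0, 1, 2] 3
```

The paper's point is that on free Heyting algebras such iterations always terminate, but the number of steps depends on the formula, with no bound uniform across all formulas.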
Integrating a Global Induction Mechanism into a Sequent Calculus
Most interesting proofs in mathematics contain an inductive argument which
requires an extension of the LK-calculus to formalize. The most commonly used
calculi for induction contain a separate induction rule or axiom, which
weakens the proof-theoretic properties of the calculus. To the best of our
knowledge, there
are no such calculi which allow cut-elimination to a normal form with the
subformula property, i.e. every formula occurring in the proof is a subformula
of the end sequent. Proof schemata are a variant of LK-proofs able to simulate
induction by linking proofs together. There exists a schematic normal form
which has comparable proof theoretic behaviour to normal forms with the
subformula property. However, a calculus for the construction of proof schemata
does not exist. In this paper, we introduce a calculus for proof schemata and
prove soundness and completeness with respect to a fragment of the inductive
arguments formalizable in Peano arithmetic.
Comment: 16 pages
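The difficulty with a separate induction rule can be made concrete: a standard formulation added to LK looks as follows, and its auxiliary formula A(x) need not be a subformula of the end-sequent (standard material, not this paper's calculus):

```latex
\[
  \frac{\Gamma,\ A(x) \vdash A(s(x)),\ \Delta}
       {\Gamma,\ A(0) \vdash A(t),\ \Delta}\ (\mathrm{ind})
  \qquad x \text{ not free in } \Gamma, \Delta
\]
```

Cut-elimination in the presence of (ind) therefore cannot restore the subformula property, which is what motivates simulating induction by linked proof schemata instead.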
Automating Agential Reasoning: Proof-Calculi and Syntactic Decidability for STIT Logics
This work provides proof-search algorithms and automated counter-model extraction for a class of STIT logics. With this, we answer an open problem concerning syntactic decision procedures and cut-free calculi for STIT logics. A new class of cut-free complete labelled sequent calculi G3LdmL^m_n, for multi-agent STIT with at most n-many choices, is introduced. We refine the calculi G3LdmL^m_n through the use of propagation rules and demonstrate the admissibility of their structural rules, resulting in auxiliary calculi Ldm^m_nL. In the single-agent case, we show that the refined calculi Ldm^m_nL derive theorems within a restricted class of (forestlike) sequents, allowing us to provide proof-search algorithms that decide single-agent STIT logics. We prove that the proof-search algorithms are correct and terminate.
Algebraic totality, towards completeness
Finiteness spaces constitute a categorical model of Linear Logic (LL) whose
objects can be seen as linearly topologised spaces (a class of topological
vector spaces introduced by Lefschetz in 1942) and morphisms as continuous
linear maps. First, we recall definitions of finiteness spaces and describe
their basic properties deduced from the general theory of linearly topologised
spaces. Then we give an interpretation of LL based on linear algebra. Second,
thanks to separation properties, we can introduce an algebraic notion of
totality candidate in the framework of linearly topologised spaces: a totality
candidate is a closed affine subspace which does not contain 0. We show that
finiteness spaces with totality candidates constitute a model of classical LL.
Finally, we give a barycentric simply typed lambda-calculus, with booleans
and a conditional operator, which can be interpreted in this
model. We prove completeness at type, for
every n, by an algebraic method.